Probability maximization via Minkowski functionals: convex representations and tractable resolution

Authors

Abstract

In this paper, we consider the maximization of the probability $${\mathbb {P}}\left\{ \zeta \mid \zeta \in {\mathbf {K}}({\mathbf {x}}) \right\} $$ over a closed and convex set $${\mathcal {X}}$$, a special case of the chance-constrained optimization problem. Suppose $${\mathbf {K}}({\mathbf {x}}) \triangleq \left\{ \zeta \in {\mathcal {K}} \mid c({\mathbf {x}},\zeta ) \ge 0 \right\} $$, where $$\zeta $$ is uniformly distributed on a convex and compact set $${\mathcal {K}}$$ and $$c({\mathbf {x}},\zeta )$$ is defined as either $$c({\mathbf {x}},\zeta ) \triangleq 1-\left| \zeta ^T{\mathbf {x}}\right| ^m$$ where $$m\ge 0$$ (Setting A) or $$c({\mathbf {x}},\zeta ) \triangleq {\mathbf {T}}{\mathbf {x}} - \zeta $$ (Setting B). We show that in either setting, by leveraging recent findings in the context of non-Gaussian integrals of positively homogenous functions, $${\mathbb {P}}\left\{ \zeta \mid \zeta \in {\mathbf {K}}({\mathbf {x}}) \right\} $$ can be expressed as the expectation of a suitably defined continuous function $$F(\bullet ,\xi )$$ with respect to an appropriately defined Gaussian density (or its variant), i.e. $${\mathbb {E}}_{{\tilde{p}}}\left[ F({\mathbf {x}},\xi )\right] $$. Aided by this observation, our analysis leads to a convex representation of the original problem, requiring the minimization of $$g\left( {\mathbb {E}}\left[ F(\bullet ,\xi )\right] \right) $$ over $${\mathcal {X}}$$, where $$g$$ is an appropriately defined smooth convex function. Traditional stochastic approximation schemes cannot contend with the minimization of $$g\left( {\mathbb {E}}\left[ F(\bullet ,\xi )\right] \right) $$ over $${\mathcal {X}}$$, since conditionally unbiased sampled gradients are unavailable. We then develop a regularized variance-reduced stochastic approximation (r-VRSA) scheme that obviates the need for such unbiasedness by combining iterative regularization with variance-reduction. Notably, (r-VRSA) is characterized by almost-sure convergence guarantees, a convergence rate of $${\mathcal {O}}(1/k^{1/2-a})$$ in expected sub-optimality where $$a > 0$$, and a sample complexity of $${\mathcal {O}}(1/\epsilon ^{6+\delta })$$ where $$\delta > 0$$. To the best of our knowledge, this may be the first such scheme for probability maximization problems with convergence and rate guarantees. Preliminary numerics on a portfolio selection problem (Setting A) and a set-covering problem (Setting B) suggest that the scheme competes well with naive mini-batch SA schemes as well as integer programming methods.
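To make the computational template concrete, the sketch below shows a generic regularized, growing-batch projected-gradient loop for $$\min _{{\mathbf {x}} \in {\mathcal {X}}} g\left( {\mathbb {E}}\left[ F({\mathbf {x}},\xi )\right] \right) $$: batch averages of inner values and Jacobians stand in for the variance-reduction step, and a vanishing Tikhonov term for the iterative regularization. All callables (g_grad, F, F_jac, sample_xi, project) and the step-size/batch schedules are assumed interfaces for illustration, not the authors' exact r-VRSA updates or parameter choices.

```python
import numpy as np

def r_vrsa_sketch(g_grad, F, F_jac, sample_xi, project, x0,
                  iters=50, gamma0=1.0, lam0=1.0, a=0.25):
    """Illustrative regularized, growing-batch SA loop for
    min_{x in X} g(E[F(x, xi)]).  All callables are assumed interfaces:
      g_grad(u)     -> gradient of the smooth outer function g at u
      F(x, xi)      -> inner function value (a vector)
      F_jac(x, xi)  -> Jacobian of F with respect to x at (x, xi)
      sample_xi(n)  -> list of n i.i.d. samples of xi
      project(x)    -> Euclidean projection onto the closed convex set X
    This sketches the ingredients named in the abstract (variance
    reduction via growing batches, vanishing regularization), not the
    authors' exact r-VRSA update.
    """
    x = np.asarray(x0, dtype=float)
    for k in range(1, iters + 1):
        xis = sample_xi(k * k)                  # growing batch size
        F_bar = np.mean([F(x, xi) for xi in xis], axis=0)
        J_bar = np.mean([F_jac(x, xi) for xi in xis], axis=0)
        grad = J_bar.T @ g_grad(F_bar)          # compositional gradient estimate
        lam_k = lam0 / k ** a                   # vanishing Tikhonov regularizer
        gamma_k = gamma0 / np.sqrt(k)           # diminishing step size
        x = project(x - gamma_k * (grad + lam_k * x))
    return x
```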


Similar Articles

Convex integer maximization via Graver bases

We present a new algebraic algorithmic scheme to solve convex integer maximization problems of the following form, where c is a convex function on $$\mathbb {R}^d$$ and $$w_1x, \ldots , w_dx$$ are linear forms on $$\mathbb {R}^n$$: $$\max \,\{ c(w_1x, \ldots , w_dx) : Ax = b,\ x \in \mathbb {N}^n \} $$. This method works for arbitrary input data $$A, b, d, w_1, \ldots , w_d, c$$. Moreover, for fixed d and several important classes of programs in variable dimension, we...
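For concreteness, the toy below enumerates a tiny instance of the stated problem form with d = 1; the data A, b, w and the outer function c are invented for illustration, and brute-force enumeration is precisely what the Graver-basis machinery described above makes unnecessary.

```python
import itertools
import numpy as np

# Toy brute-force instance of  max { c(w1 x) : A x = b, x in N^n }  with d = 1.
A = np.array([[1, 1, 1]])        # single constraint: x1 + x2 + x3 = 4
b = np.array([4])
w = np.array([1, -2, 3])         # one linear form w1
c = lambda y: y * y              # convex outer function c(y) = y^2

feasible = [np.array(x) for x in itertools.product(range(5), repeat=3)
            if (A @ np.array(x) == b).all()]
best = max(feasible, key=lambda x: c(w @ x))
print(best, c(w @ best))         # [0 0 4] with objective value 144
```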


Stochastic Shortest Paths Via Quasi-convex Maximization

We consider the problem of finding shortest paths in a graph with independent randomly distributed edge lengths. Our goal is to maximize the probability that the path length does not exceed a given threshold value (deadline). We give a surprising exact $$n^{\Theta (\log n)}$$ algorithm for the case of normally distributed edge lengths, which is based on quasi-convex maximization. We then prove average and smoothe...
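For intuition, when the edge lengths on a path are independent normals, the path length is $$N(\mu , \sigma ^2)$$ and the on-time probability is $$\Phi ((t-\mu )/\sigma )$$; since $$\Phi $$ is increasing, maximizing it over paths (for deadlines above the mean) reduces to maximizing the ratio $$(t-\mu )/\sigma $$, the quasi-convex objective the algorithm exploits. A minimal sketch with invented data:

```python
import math

def on_time_probability(mu, var, deadline):
    """P(path length <= deadline) when a path's independent normal edge
    lengths sum to N(mu, var).  For deadlines above the mean, maximizing
    this over paths is equivalent to maximizing (deadline - mu)/sqrt(var)."""
    return 0.5 * (1.0 + math.erf((deadline - mu) / math.sqrt(2.0 * var)))

# Two hypothetical paths with equal means: the lower-variance one is
# more likely to meet the deadline.
print(on_time_probability(mu=10.0, var=4.0, deadline=12.0))  # ~0.841
print(on_time_probability(mu=10.0, var=1.0, deadline=12.0))  # ~0.977
```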


Constructive Existence of Minkowski Functionals

In Bishop's constructive mathematics, the framework of this paper, there are many situations where we cannot easily prove the existence of a functional whose existence is a trivial consequence of classical logic. One such functional is the Minkowski functional of a convex absorbing set. We shall prove the existence of Minkowski functionals in various spaces, and apply the theorems to establish th...
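For reference, the Minkowski functional of a convex absorbing set C is $$\mu _C(x) = \inf \{ t > 0 : x/t \in C \} $$. The sketch below evaluates it numerically by bisection on a membership oracle, which is valid classically; the paper's concern is when such an infimum can be shown to exist constructively. The helper name and tolerances are illustrative.

```python
import numpy as np

def minkowski_functional(x, in_C, hi=1e6, tol=1e-9):
    """Numerically evaluate mu_C(x) = inf { t > 0 : x/t in C } for a convex
    absorbing set C, given only a membership oracle in_C.  Bisection works
    classically because t -> [x/t in C] is monotone when C is convex and
    contains the origin."""
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if in_C(x / mid):
            hi = mid
        else:
            lo = mid
    return hi

# For the closed unit Euclidean ball, mu_C is the Euclidean norm:
unit_ball = lambda y: np.linalg.norm(y) <= 1.0
print(minkowski_functional(np.array([3.0, 4.0]), unit_ball))  # ~5.0
```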


Holomorphic Mappings Preserving Minkowski Functionals

We show that the equality $$m_1(f(x)) = m_2(g(x))$$ for x in a neighborhood of a point a remains valid for all x, provided that f and g are open holomorphic maps, $$f(a) = g(a) = 0$$, and $$m_1, m_2$$ are Minkowski functionals of bounded balanced domains. Moreover, a polynomial relation between f and g is obtained. Next we generalize these results to bounded quasi-balanced domains. Moreover, the main results of ...


Learning Maximal Margin Markov Networks via Tractable Convex Optimization

Learning of Markov networks constitutes a challenging optimization problem. Even the predictive step of a general Markov network involves solving an NP-complete max-sum problem. Using the discriminative approach, learning of Markov networks from noisy examples can be transformed to a convex quadratic program with an intractably large number of linear constraints. The intractable quadratic p...



Journal

Journal: Mathematical Programming

Year: 2022

ISSN: 0025-5610, 1436-4646

DOI: https://doi.org/10.1007/s10107-022-01859-8